JRE-L: Journalist, Reader, and Editor LLMs in the Loop for Science Journalism for the General Audience

Jiang, Gongyao, Shi, Xinran, Luo, Qiong

arXiv.org Artificial Intelligence

The journalist's writing is iteratively refined by feedback from the reader and suggestions from the editor. Our experiments demonstrate that by leveraging the collaboration of two 7B and one 1.8B open-source LLMs, we can generate articles that are more accessible than those generated by existing methods, including prompting single advanced models such as GPT-4 and other LLM-collaboration strategies. Our code is publicly available at github.com/Zzoay/JRE-L.

Figure 1: An article written by a science journalist may be challenging for the general reader without the reader's feedback to the editor in the revision cycle (a). Incorporating the reader's feedback into the journalism cycle can help enhance the readability of the article (b).


Steering AI-Driven Personalization of Scientific Text for General Audiences

Kim, Taewook, Agarwal, Dhruv, Ackerman, Jordan, Saha, Manaswi

arXiv.org Artificial Intelligence

Digital media platforms (e.g., social media, science blogs) offer opportunities to communicate scientific content to general audiences at scale. However, these audiences vary in their scientific expertise, literacy levels, and personal backgrounds, making effective science communication challenging. To address this challenge, we designed TranSlider, an AI-powered tool that generates personalized translations of scientific text based on individual user profiles (e.g., hobbies, location, and education). Our tool features an interactive slider that allows users to steer the degree of personalization from 0 (weakly relatable) to 100 (strongly relatable), leveraging LLMs to generate the translations with given degrees. Through an exploratory study with 15 participants, we investigated both the utility of these AI-personalized translations and how interactive reading features influenced users' understanding and reading experiences. We found that participants who preferred higher degrees of personalization appreciated the relatable and contextual translations, while those who preferred lower degrees valued concise translations with subtle contextualization. Furthermore, participants reported the compounding effect of multiple translations on their understanding of scientific content. Given these findings, we discuss several implications of AI-personalized translation tools in facilitating communication in collaborative contexts.
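As a rough illustration of how a slider value could steer an LLM prompt, here is a minimal sketch. The function name, profile fields, and degree thresholds are assumptions for illustration, not details from the paper:

```python
# Hypothetical sketch of TranSlider-style prompt construction: the slider
# value (0-100) controls how strongly the translation is tailored to the
# reader's profile. Thresholds and wording are illustrative assumptions.

def build_prompt(text: str, profile: dict, degree: int) -> str:
    """Compose an LLM prompt for a personalized translation of scientific text.

    degree: 0 (weakly relatable) .. 100 (strongly relatable).
    """
    if not 0 <= degree <= 100:
        raise ValueError("degree must be between 0 and 100")
    if degree < 34:
        style = "Keep the translation concise, with only subtle contextualization."
    elif degree < 67:
        style = "Weave in occasional analogies drawn from the reader's background."
    else:
        style = "Strongly ground every explanation in the reader's background."
    background = ", ".join(f"{k}: {v}" for k, v in profile.items())
    return (
        f"Reader profile ({background}).\n"
        f"Personalization degree: {degree}/100. {style}\n"
        f"Rewrite the following scientific text for this reader:\n{text}"
    )

prompt = build_prompt(
    "CRISPR-Cas9 enables targeted genome editing.",
    {"hobbies": "cooking", "education": "high school"},
    degree=80,
)
```

The resulting string would then be sent to the LLM; only the instruction text changes with the slider, so translations at different degrees remain comparable.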


LLM-Collaboration on Automatic Science Journalism for the General Audience

Jiang, Gongyao, Shi, Xinran, Luo, Qiong

arXiv.org Artificial Intelligence

Science journalism reports current scientific discoveries to non-specialists, aiming to enable public comprehension of the state of the art. However, this task can be challenging as the audience often lacks specific knowledge about the presented research. To address this challenge, we propose a framework that integrates three LLMs mimicking the real-world writing-reading-feedback-revision workflow, with one LLM acting as the journalist, a smaller LLM as the general public reader, and the third LLM as an editor. The journalist's writing is iteratively refined by feedback from the reader and suggestions from the editor. Our experiments demonstrate that by leveraging the collaboration of two 7B and one 1.8B open-source LLMs, we can generate articles that are more accessible than those generated by existing methods, including advanced models such as GPT-4.
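The writing-reading-feedback-revision workflow described above can be sketched as a simple control loop. The three "LLMs" here are stand-in callables to show the data flow only; in the paper they are a 7B journalist, a smaller 1.8B reader, and a 7B editor:

```python
# Minimal sketch of the three-LLM refinement loop (control flow only;
# the callables below are toy stand-ins, not the paper's models).

def refine(journalist, reader, editor, paper: str, rounds: int = 3) -> str:
    article = journalist(paper, feedback=None, suggestions=None)
    for _ in range(rounds):
        feedback = reader(article)               # readability feedback from the lay reader
        suggestions = editor(article, feedback)  # editorial suggestions
        article = journalist(paper, feedback=feedback, suggestions=suggestions)
    return article

# Toy stand-ins that make the loop runnable:
journalist = lambda paper, feedback, suggestions: (
    f"{paper} [revised per: {feedback}]" if feedback else paper
)
reader = lambda article: "too jargon-heavy"
editor = lambda article, feedback: "define terms on first use"

article = refine(journalist, reader, editor, "A study on protein folding.", rounds=1)
```

Each round feeds the reader's reaction through the editor back to the journalist, which is the mechanism the authors credit for the accessibility gains.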


Large Language Model for Causal Decision Making

Jiang, Haitao, Ge, Lin, Gao, Yuhe, Wang, Jianian, Song, Rui

arXiv.org Machine Learning

Large Language Models (LLMs) have shown success in language understanding and reasoning on general topics. However, their capability to perform inference based on user-specified structured data and knowledge of corpus-rare concepts, such as causal decision-making, remains limited. In this work, we explore the possibility of fine-tuning an open-source LLM into LLM4Causal, which can identify the causal task, execute a corresponding function, and interpret its numerical results based on users' queries and the provided dataset. Meanwhile, we propose a data generation process for more controllable GPT prompting and present two instruction-tuning datasets: (1) Causal-Retrieval-Bench for causal problem identification and input parameter extraction for causal function calling and (2) Causal-Interpret-Bench for in-context causal interpretation. With three case studies, we show that LLM4Causal can deliver end-to-end solutions for causal problems and provide easy-to-understand answers. Numerical studies also reveal that it has a remarkable ability to identify the correct causal task given a query.
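The three-stage pipeline (identify the causal task, execute the matching routine, interpret the number) can be sketched as follows. All function names and the keyword-based task classifier are illustrative assumptions; in LLM4Causal the classification and interpretation stages are handled by the fine-tuned LLM:

```python
# Hypothetical sketch of the identify -> execute -> interpret pipeline.
# The keyword matcher stands in for the fine-tuned LLM's task identification.

def identify_task(query: str) -> str:
    # Toy stand-in for LLM-based causal task classification.
    if "effect" in query.lower():
        return "average_treatment_effect"
    return "causal_discovery"

def average_treatment_effect(data: list) -> float:
    # data: list of (treatment_indicator, outcome) pairs.
    treated = [y for t, y in data if t == 1]
    control = [y for t, y in data if t == 0]
    return sum(treated) / len(treated) - sum(control) / len(control)

def interpret(task: str, value: float) -> str:
    # Toy stand-in for the LLM's natural-language interpretation stage.
    return f"The estimated {task.replace('_', ' ')} is {value:.2f}."

data = [(1, 3.0), (1, 5.0), (0, 2.0), (0, 2.0)]
task = identify_task("What is the effect of the drug on recovery?")
answer = interpret(task, average_treatment_effect(data))
```

The point of the design is the clean hand-off: the LLM never computes the estimate itself, it only routes the query to a statistical function and phrases the result.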


An introduction to science communication at #NeurIPS2023

AIHub

We're pleased to announce that we will be giving a short introduction to science communication for AI researchers at NeurIPS this year. This will be held in person on Monday 11 December from 12:45. If you are attending the conference and fancy finding out how you can communicate your research to a general audience in different formats, then please do join us. Following an hour-long introductory talk, there will be an optional, open, drop-in session where you can try out some of the things you learnt in the course, ask any sci-comm questions, and chat about your ideas and stories. One of the challenges facing the field of AI is its portrayal in the media, which leads to misconceptions among policy makers, business leaders, and the general public alike.


Science communication for AI researchers: our tutorial at #AAAI2023

AIHub

We're pleased to announce that we will be giving a tutorial on science communication for AI researchers at AAAI this year. This will be held in person on Wednesday 8 February (08:30-12:30). If you are attending the conference and fancy finding out how you can communicate your research to a general audience in different formats, then please do join us. The course will be hands-on, so you will need to come prepared with some research in mind that you'd like to communicate about. One of the challenges facing the field of AI is its portrayal in the media, which leads to misconceptions among policy makers, business leaders, and the general public alike.


Content Rating Classification for Fan Fiction

Qiao, Yu, Pope, James

arXiv.org Artificial Intelligence

Content ratings can enable audiences to determine the suitability of various media products. With the recent growth of fan fiction, the critical issue of fan fiction content ratings has emerged. Whether fan fiction content ratings are done voluntarily or required by regulation, there is a need to automate content rating classification. The problem is to take fan fiction text and determine the appropriate content rating. Methods exist for other domains, such as online books, but none have been applied to fan fiction. We propose natural language processing techniques, including traditional and deep learning methods, to automatically determine the content rating. We show that these methods produce poor accuracy results for multi-class classification. We then demonstrate that treating the problem as a binary classification problem produces better accuracy. Finally, we believe, and provide some evidence, that the current approach of self-annotation has led to incorrect labels, limiting classification results.
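The abstract reports better accuracy when the multi-class rating problem is collapsed to binary. A sketch of such a label mapping (the rating names follow common fan-fiction rating schemes and the grouping is an assumption, not the paper's exact mapping):

```python
# Illustrative collapse of a multi-class rating scheme to a binary one.
# Rating names and the grouping are assumptions for illustration.

RATING_TO_BINARY = {
    "general": "all_ages",
    "teen": "all_ages",
    "mature": "restricted",
    "explicit": "restricted",
}

def to_binary(rating: str) -> str:
    """Map a fine-grained content rating to a two-class label."""
    return RATING_TO_BINARY[rating.lower()]
```

Merging sparse, noisily self-annotated classes into two broader ones reduces label noise at the class boundary, which is one plausible reason the binary formulation scores higher.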


Science communication for AI researchers – an AIhub tutorial at IJCAI-ECAI 2022

AIHub

We're pleased to announce that we will be giving a tutorial on science communication for AI researchers at IJCAI-ECAI this year. This will be held in person on 25 July (the afternoon session). If you are attending the conference and fancy finding out how you can communicate your research to a general audience in different formats, then please do sign up to join us. One of the challenges facing the field of AI is its portrayal in the media, which leads to misconceptions among policy makers, business leaders, and the general public alike. By communicating about AI in a clear, informed, and measured manner we can help to combat the flow of misinformation and convey the reality of today's technology.


AIhub coffee corner: AI images

AIHub

The representation of AI in the media has long been a problem, with blue brains, white robots, and flying maths – usually completely unrelated to the content of the article – featuring heavily. Not too long ago, the team at Better images of AI released a gallery of free-to-use images which they hope will increase public understanding around the different aspects of AI, and enable more meaningful conversations. Joining the discussion this time are: Sabine Hauert (University of Bristol), Michael Littman (Brown University), Carles Sierra (CSIC), Anna Tahovska (Czech Technical University) and Oskar von Stryk (Technische Universität Darmstadt). Sabine Hauert: There are lots of aspects we can consider when thinking about AI images. For example, how can we source or design better images for AI? How should AI be represented pictorially in articles, blogs etc? What's the problem with images in AI?


How To Paraphrase Text Using Python - AI Summary

#artificialintelligence

As writers, we often seek out tools to help us become more efficient or productive. Tools such as Grammarly can help with language editing. Text generation tools can rapidly generate original content from just a few keyword ideas. Perhaps this could help end writer's block? That is a debatable question best saved for another time.
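The snippet above stops short of showing any code. As a toy illustration only (not the linked article's actual method, which would typically use a neural paraphrasing model), here is a minimal synonym-substitution paraphraser using just the standard library:

```python
# Toy dictionary-based paraphraser: swaps known words for synonyms while
# preserving capitalization. Real paraphrasing tools use neural models;
# this only illustrates the text-rewriting idea.

import re

SYNONYMS = {"help": "assist", "rapidly": "quickly", "original": "novel"}

def paraphrase(text: str) -> str:
    def swap(match: re.Match) -> str:
        word = match.group(0)
        repl = SYNONYMS.get(word.lower())
        if repl is None:
            return word  # word not in the synonym table: leave unchanged
        return repl.capitalize() if word[0].isupper() else repl

    # Replace each alphabetic word via the swap callback.
    return re.sub(r"[A-Za-z]+", swap, text)

result = paraphrase("Tools can help writers rapidly produce original drafts.")
```

A dictionary lookup like this cannot handle word sense or grammar, which is precisely the gap that LLM-based paraphrasers fill.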